EEG - Event Related Potentials (ERP) Detection
Tags: detect, eeg, erp, p300
Event Related Potentials (ERP) can be triggered by auditory, visual or somatosensory stimuli. One example is the P300, a positive deflection occurring about 300 ms after an odd (rare, unexpected) event.
The Odd-Ball paradigm is the standard method used to elicit the P300. Synchronisation of the EEG with the precise moment of stimulus onset is important because evoked potentials occur between 100 and 900 ms after stimulus onset (ref).
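In an Odd-Ball protocol, a rare target tone is interleaved pseudo-randomly with a frequent standard tone. The following is a minimal sketch of how such a stimulus sequence could be generated; the function name, probability and seed are illustrative and not part of the original experiment:

```python
import random

def oddball_sequence(n_tones=100, odd_prob=0.2, seed=42):
    """Generate a pseudo-random Odd-Ball stimulus sequence.

    Returns a list of labels: 'regular' for the frequent standard tone
    and 'odd' for the rare target tone that elicits the P300.
    """
    rng = random.Random(seed)
    return ["odd" if rng.random() < odd_prob else "regular"
            for _ in range(n_tones)]

seq = oddball_sequence()
print(seq.count("odd"), "odd tones out of", len(seq))
```

In a real acquisition the sequence is rendered as audio and the onset of each tone is recorded on a synchronisation channel, as done below.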
This Jupyter Notebook explains how to detect a P300 using PLUX's single-channel EEG sensor together with a device that synchronises an acoustic stimulus (generated beforehand) with the recorded data. It also illustrates the processing of the acquired EEG data for ERP detection in two test subjects.
The algorithm was tested on two subjects; however, to keep this Jupyter Notebook simple, only data from Subject 1 is presented.
1 - Experimental Setup:
For more information on electrode positioning, please refer to our Notebook on Electrode Positioning.
The audio output cable (3), which connects to headphones as well as to a computer and the biosignalsplux hub, is available upon request.
| 1. | 2. | 3. |
2 - Processing of Data:
2.1 - Importing relevant packages
# biosignalsnotebooks python package
import biosignalsnotebooks as bsnb
# Scientific packages
from numpy import loadtxt, mean, array, concatenate, load, zeros
2.2 - Load data from the signal samples library
# Load of data Subject 1
relative_file_path = "../../signal_samples/eeg_acoustic_a.h5"
data, header = bsnb.load(relative_file_path, get_header=True)
2.3 - Check File Header Data
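The header returned by `bsnb.load` is a plain Python dictionary, so it can be inspected directly. The entries below are hypothetical placeholders for illustration; the real values come from the loaded file:

```python
# Hypothetical header contents (placeholder values for illustration;
# the real dictionary is returned by bsnb.load(..., get_header=True))
header = {"device": "biosignalsplux",
          "sampling rate": 1000,
          "resolution": [16]}

# Print every metadata entry stored in the header
for key, value in header.items():
    print(key, "->", value)
```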
2.4 - Store information from file Header
# Get the acquisition information stored in the header
ch1 = "CH1" # Channel1
ch5 = "CH5" # Channel5
sr = header["sampling rate"] # Sampling rate
resolution = header["resolution"][0] # Resolution (number of available bits)
device = header["device"]
2.5 - Store the desired data in an individual variable (for both subjects)
#RAW DATA
signal_acoustic = data[ch1]
signal_eeg = data[ch5]
2.6 - Convert the RAW data to values with a physical meaning (for the EEG, electric voltage in uV)
# Signal Samples Conversion
#EEG signal [Subject 1]:
signal_uv = bsnb.raw_to_phy("EEG", device, signal_eeg, resolution, option="uV") # Conversion to uV
#sound stimuli:
signal_ac = signal_acoustic - mean(signal_acoustic)
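For context, `raw_to_phy` applies the sensor's transfer function to the ADC samples. Below is a standalone sketch of that conversion for the EEG sensor, treating VCC = 3 V and a gain of 41782 as assumptions taken from PLUX's documentation:

```python
def eeg_raw_to_uv(adc, n_bits=16, vcc=3.0, gain=41782):
    """Convert a raw ADC sample to microvolts.

    Assumes the EEG sensor transfer function
    uV = ((adc / 2**n_bits - 0.5) * vcc / gain) * 1e6,
    with VCC and gain taken from the sensor datasheet.
    """
    return ((adc / 2 ** n_bits - 0.5) * vcc / gain) * 1e6

# The mid-scale ADC value maps to 0 uV
print(eeg_raw_to_uv(2 ** 15))  # 0.0
```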
2.7 - Generate a time-axis
# EEG signal:
time_eeg = bsnb.generate_time(signal_uv, sr)
# Sound stimuli:
time_a = bsnb.generate_time(signal_acoustic, sr)
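Conceptually, the time axis is just one timestamp per sample. A minimal sketch of what such a function produces, assuming a known sampling rate (this is an illustration, not bsnb's exact implementation):

```python
from numpy import arange

def time_axis(signal, fs):
    """One timestamp (in seconds) per sample: n / fs."""
    return arange(len(signal)) / fs

t = time_axis([0] * 5000, fs=1000)
print(t[0], t[-1])  # first and last timestamps in seconds
```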
2.8 - Plot Data of Subject 1 (our major example)
Considering that the following chart is intended to give only a general overview of the acquired data, a downsampling stage is applied to reduce memory demands.
# [Subject 1]
# Downsampling data
time_eeg_down = time_eeg[::10]
signal_uv_down = signal_uv[::10]

# EEG signal:
bsnb.plot([time_eeg_down], [signal_uv_down], y_axis_label="Electric Tension (uV)", legend="EEG RAW", x_range=(0, 310))

# Downsampling data
time_a_down = time_a[::10]
signal_ac_down = signal_ac[::10]

# Sound stimuli:
bsnb.plot([time_a_down], [signal_ac_down], y_axis_label="Value RAW", legend="Acoustic Stimuli RAW", x_range=(0, 310))
2.9 - Filtering:
A) Acquired Acoustic Stimuli
First, the position corresponding to the start of the stimuli needs to be identified so that unwanted parts of the EEG signal can be removed. It can be found by visual inspection of the raw-signal plot. Assign the same start to the EEG signal as well:
# [Subject 1]
#define index of beginning of stimuli:
sound_begin = 46000 #test subject 1 (visual inspection)
eeg_begin = sound_begin
Filter the acquired sound stimuli signal using a lowpass with a cutoff frequency of 440 Hz, remove the shift from the baseline and rectify the signal:
# [Subject 1]
# Acoustic signal filtering
filter_sound = bsnb.lowpass(signal_acoustic, f=440, order=2, fs=sr)
base_sound = filter_sound - mean(filter_sound)
rect_sound = abs(base_sound[sound_begin:])
Smooth the signal using a specified smoothing level:
# Smoothing level [Size of sliding window]
smoothing_level_perc = 2 # Percentage.
smoothing_level = int((smoothing_level_perc / 100) * sr)
#Smooth the signal
smooth_sound = bsnb.smooth(rect_sound, smoothing_level, window='hanning')
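Under the hood, this kind of smoothing can be sketched as a convolution with a normalised Hanning window (an illustration of the idea, not bsnb's exact implementation; the synthetic noisy signal is made up):

```python
import numpy as np

def smooth(signal, window_len):
    """Smooth a signal by convolving it with a normalised
    Hanning window of the given length."""
    w = np.hanning(window_len)
    return np.convolve(signal, w / w.sum(), mode="same")

# Synthetic example: constant level plus Gaussian noise
rng = np.random.default_rng(0)
noisy = 1.0 + 0.1 * rng.standard_normal(1000)
smoothed = smooth(noisy, window_len=20)
```

The window length corresponds to the `smoothing_level` computed above: a percentage of the sampling rate, i.e. a fixed fraction of a second.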
Generate a new time axis for the signal:
time_sound = bsnb.generate_time(smooth_sound, sr)
A.1) Generating a Threshold to create a precise stimuli vector:
Define a threshold with a high and a low percentage value for the regular and the odd stimuli sound, respectively:
#Threshold percentage values:
thresh_1_p = 0.55 #regular sound max
thresh_2_p = 0.015 #onset odd and regular sound
thresh_3_p = 0.25 #regular sound min
Find the maximum value of the regular sound and define the corresponding thresholds:
# [Subject 1]
# Find the maximum and define the thresholds:
max_1 = max(rect_sound)
thresh_1 = thresh_1_p * max_1
thresh_2 = thresh_2_p * max_1
thresh_3 = thresh_3_p * max_1
print("sound_max:", max_1, "sound_thresh_onset:", thresh_2,
      "sound_thresh_reg_max:", thresh_1, "sound_thresh_reg_min:", thresh_3)
Create a stimuli vector for the start index of each regular and odd tone:
# Stimuli vector with the start index of each regular and odd tone
index_on = []
index_on_Odd = []
flag = -1500  # 1.5 s refractory period (allows a detection right at the start)
for index, i in enumerate(rect_sound[:-20]):  # [:-20] avoids look-ahead out of bounds
    # Regular tone: value above thresh_1; skip 1.5 s before the next check
    if i > thresh_1 and index > 1500 + flag:
        index_on.append(index)
        flag = index
    # Odd tone: value between thresh_2 and thresh_1 that stays below thresh_1
    # over the following samples; skip 1.5 s before the next check
    elif (thresh_2 < i < thresh_1
          and rect_sound[index + 10] < thresh_1
          and rect_sound[index + 15] < thresh_1
          and rect_sound[index + 20] < thresh_1
          and index > 1500 + flag):
        index_on_Odd.append(index)
        flag = index
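The core pattern in the cell above is a threshold crossing with a refractory period: once a tone is detected, detections are suppressed for 1.5 s. A self-contained sketch of that pattern on a synthetic signal (the helper name and test values are illustrative):

```python
def onset_indices(signal, threshold, refractory):
    """Return indices where the signal first crosses the threshold,
    suppressing detections for `refractory` samples after each one
    (the same idea as the 1.5 s jump in the stimuli-vector cell)."""
    onsets = []
    last = -refractory
    for i, v in enumerate(signal):
        if v > threshold and i - last >= refractory:
            onsets.append(i)
            last = i
    return onsets

# Synthetic example: pulses at samples 100, 400 and 401;
# the pulse at 401 falls inside the refractory period and is ignored
sig = [0.0] * 600
for p in (100, 400, 401):
    sig[p] = 1.0
print(onset_indices(sig, threshold=0.5, refractory=200))  # [100, 400]
```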
The precision of this threshold method is approximately 2 ms (determined by visual inspection).
Visualise the stimuli onset: